Security threats affect various types of networks, and the Internet of Things (IoT) environment in particular requires early detection. IoT is a network of real-time devices, such as home automation systems, that can be controlled by open-source Android devices, which makes it an open ground for attackers. Attackers can access the network, initiate different kinds of security breaches, and compromise network control. Therefore, timely detection of the increasing number of sophisticated malware attacks is a key challenge in ensuring the credibility of network protection. In this regard, we have developed a new malware detection framework, Deep Squeezed-Boosted and Ensemble Learning (DSBEL), comprised of a novel Squeezed-Boosted Boundary-Region Split-Transform-Merge (SB-BR-STM) CNN and ensemble learning. The proposed STM block employs multi-path dilated convolutional, boundary, and regional operations to capture homogeneous and heterogeneous global malicious patterns. Moreover, diverse feature maps are achieved using transfer learning and multi-path-based squeezing and boosting at the initial and final levels to learn minute pattern variations. Finally, the boosted discriminative features are extracted from the developed deep SB-BR-STM CNN and provided to ensemble classifiers (SVM, MLP, and AdaBoostM1) to improve hybrid learning generalization. The performance of the proposed DSBEL framework and SB-BR-STM CNN has been evaluated against existing techniques on the IOT_Malware dataset using standard performance measures. Evaluation results show strong performance: 98.50% accuracy, 97.12% F1-score, 91.91% MCC, 95.97% recall, and 98.42% precision. The proposed malware analysis framework is helpful for the timely detection of malicious activity and suggests future strategies.
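The final stage of the framework feeds one deep feature vector to several base classifiers and combines their decisions. As a minimal sketch of that ensemble stage, the snippet below uses simple majority voting over three hypothetical stand-in classifiers (the paper's actual SVM, MLP, and AdaBoostM1 models would be trained on the SB-BR-STM features):

```python
from collections import Counter

def ensemble_predict(feature_vector, classifiers):
    """Majority vote over base classifiers (e.g., SVM, MLP, AdaBoostM1)."""
    votes = [clf(feature_vector) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical stand-ins for trained base classifiers operating on
# deep features extracted by the CNN; thresholds are illustrative only.
svm = lambda x: "malware" if x[0] > 0.5 else "benign"
mlp = lambda x: "malware" if sum(x) > 1.0 else "benign"
ada = lambda x: "malware" if x[-1] > 0.3 else "benign"

label = ensemble_predict([0.7, 0.2, 0.9], [svm, mlp, ada])
```

A weighted or stacked combination could replace the plain vote without changing the overall pipeline shape.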
Neural models that do not rely on pre-training have excelled in the keyphrase generation task with large annotated datasets. Meanwhile, new approaches have incorporated pre-trained language models (PLMs) for their data efficiency. However, there has been no systematic study of how the two types of approaches compare and how different design choices can affect the performance of PLM-based models. To fill in this knowledge gap and facilitate a more informed use of PLMs for keyphrase extraction and keyphrase generation, we present an in-depth empirical study. Formulating keyphrase extraction as sequence labeling and keyphrase generation as sequence-to-sequence generation, we perform extensive experiments in three domains. After showing that PLMs have competitive high-resource performance and state-of-the-art low-resource performance, we investigate important design choices, including in-domain PLMs, PLMs with different pre-training objectives, using PLMs with a parameter budget, and different formulations for present keyphrases. Further results show that (1) in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models; (2) with a fixed parameter budget, prioritizing model depth over width and allocating more layers in the encoder leads to better encoder-decoder models; and (3) introducing four in-domain PLMs, we achieve competitive performance in the news domain and state-of-the-art performance in the scientific domain.
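Formulating keyphrase extraction as sequence labeling means the model emits a BIO tag per token and present keyphrases are read off from the tag sequence. A minimal decoder for that formulation (tag names and the example sentence are illustrative, not from the paper) might look like:

```python
def decode_bio(tokens, tags):
    """Recover present-keyphrase spans from per-token BIO labels."""
    phrases, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":                # a new keyphrase starts here
            if current:
                phrases.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:  # continue the open keyphrase
            current.append(tok)
        else:                         # "O" (or stray "I") closes any open span
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

tokens = ["deep", "keyphrase", "generation", "with", "pre-trained", "models"]
tags   = ["O",    "B",         "I",          "O",    "B",           "I"]
```

Absent keyphrases cannot be recovered this way, which is why generation is additionally cast as sequence-to-sequence decoding.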
Privacy policies provide individuals with information about their rights and how their personal information is handled. Natural language understanding (NLU) technologies can help individuals and practitioners better understand the privacy practices described in lengthy and complex documents. However, existing efforts that use NLU technologies are limited to processing the language in a way that is exclusive to a single task focused on certain privacy practices. To this end, we introduce the Privacy Policy Language Understanding Evaluation (PLUE) benchmark, a multi-task benchmark for evaluating privacy policy language understanding across various tasks. We also collect a large corpus of privacy policies to enable privacy-policy domain-specific language model pre-training. We demonstrate that domain-specific pre-training offers performance improvements across all tasks. We release the benchmark to encourage future research in this domain.
While pre-trained language models (LMs) for code have achieved great success in code completion, they generate code conditioned only on the contents within the file, i.e., the in-file context, and ignore the rich semantics in other files within the same project, i.e., the cross-file context, a critical source of information that is especially useful in modern modular software development. This oversight constrains code LMs' capacity in code completion, leading to unexpected behaviors such as generating hallucinated class member functions or function calls with unexpected arguments. In this work, we develop a cross-file context finder tool, CCFINDER, that effectively locates and retrieves the most relevant cross-file context. We propose CoCoMIC, a framework that incorporates cross-file context to learn the in-file and cross-file context jointly on top of pre-trained code LMs. CoCoMIC successfully improves the existing code LM with a 19.30% relative increase in exact match and a 15.41% relative increase in identifier matching for code completion when the cross-file context is provided.
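To make the retrieval idea concrete, here is a deliberately simplified sketch of ranking project files by identifier overlap with the file being completed. This is an assumption-laden toy, not CCFINDER's actual algorithm, which builds and searches a richer project context:

```python
import re

def identifier_overlap(in_file_code, candidate_code):
    """Count identifiers shared between the current file and a candidate file."""
    ids_a = set(re.findall(r"[A-Za-z_]\w*", in_file_code))
    ids_b = set(re.findall(r"[A-Za-z_]\w*", candidate_code))
    return len(ids_a & ids_b)

def retrieve_cross_file_context(in_file_code, project_files, top_k=1):
    """Return the top_k project files most relevant to the in-file context."""
    ranked = sorted(project_files.items(),
                    key=lambda kv: identifier_overlap(in_file_code, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical two-file project.
project = {"utils.py": "def load_config(path): pass",
           "models.py": "class UserModel: pass"}
in_file = "cfg = load_config('settings.yml')"
```

The retrieved snippets would then be fed to the LM alongside the in-file context so completions can reference real cross-file entities instead of hallucinated ones.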
With the emergence of rich visual representations and pre-trained language models, video captioning has continually improved over time. Despite the performance gains, video captioning models are prone to hallucination. Hallucination refers to the generation of highly pathological descriptions that are detached from the source material. In video captioning, there are two kinds of hallucination: object and action hallucination. Instead of striving to learn better representations of videos, in this work we investigate the fundamental sources of the hallucination problem. We identify three main factors: (i) inadequacy of the visual features extracted from pre-trained models, (ii) improper influence of the source and target contexts during multimodal fusion, and (iii) exposure bias in the training strategy. To mitigate these issues, we propose two robust solutions: (a) an auxiliary head trained in a multi-label setting on top of the extracted visual features, and (b) a context gate that dynamically selects features during fusion. Standard evaluation metrics for video captioning measure similarity with ground-truth captions and do not adequately capture object and action relevance. To this end, we propose a new metric, COAHA (Caption Object and Action Hallucination Assessment), which evaluates the degree of hallucination. Our method achieves state-of-the-art performance on the MSR-Video to Text (MSR-VTT) and Microsoft Research Video Description Corpus (MSVD) datasets, especially by a large margin in the CIDEr score.
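To illustrate what an object-hallucination measurement looks like in principle, the sketch below computes the fraction of object words in a caption that do not appear among the video's ground-truth objects. This is only a simplified stand-in under an assumed object vocabulary; COAHA's actual formulation (covering both objects and actions) is defined in the paper:

```python
# Hypothetical closed vocabulary of object words, for illustration only.
OBJECT_VOCAB = {"dog", "ball", "car", "person"}

def object_hallucination_rate(caption_tokens, ground_truth_objects):
    """Fraction of object mentions in a caption absent from the video."""
    mentioned = [t for t in caption_tokens if t in OBJECT_VOCAB]
    if not mentioned:
        return 0.0
    hallucinated = [t for t in mentioned if t not in ground_truth_objects]
    return len(hallucinated) / len(mentioned)
```

A symmetric count over action words would give the action-hallucination half of such an assessment.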
This study aims to develop a novel street light management system powered by computer vision technology on closed-circuit television (CCTV) cameras, which allows light-emitting diode (LED) street lights to light up automatically at an appropriate brightness by recognizing the presence of pedestrians or vehicles in the video through semantic image segmentation, and to dim when they are absent.
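The control logic downstream of the segmentation model is simple: if any pixel in the frame is labeled as a pedestrian or vehicle, raise the brightness; otherwise dim. A minimal sketch, with label names and brightness levels as assumptions:

```python
def street_light_brightness(segmentation_labels, high=100, low=20):
    """Return an LED brightness (%) from per-pixel semantic labels of a CCTV frame.

    segmentation_labels: 2D nested list of class-name strings per pixel.
    """
    present = any(label in ("pedestrian", "vehicle")
                  for row in segmentation_labels
                  for label in row)
    return high if present else low

busy_frame  = [["road", "vehicle"], ["road", "road"]]
empty_frame = [["road", "road"], ["road", "road"]]
```

In a deployment, hysteresis or a hold-time would likely be added so the lamp does not flicker between frames.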
Machine learning (ML) models, such as SVMs, for tasks like classification and clustering of sequences require a definition of distance/similarity between pairs of sequences. Several methods have been proposed to compute similarity between sequences, such as exact methods that count the number of matches between k-mers (subsequences of length k) and approximate methods that estimate pairwise similarity scores. Although exact methods yield better classification performance, they are computationally expensive, limiting their applicability to a small number of sequences. Approximate algorithms have proven to be more scalable, with comparable (sometimes better) performance than the exact methods; they are designed in a "general" way to handle different types of sequences (e.g., music, proteins, etc.). Although general applicability is a desirable property of an algorithm, it is not desirable in all situations. For example, in the current COVID-19 (coronavirus) pandemic, a method that can specifically handle coronavirus sequences is needed. To this end, we propose a series of methods to improve the performance of approximate kernels (using minimizers and information gain), aiming to enhance their predictive performance on coronavirus sequences. More specifically, we use domain knowledge (computed using information gain) and efficient preprocessing (computed using minimizers) to improve the quality of approximate kernels for classifying coronavirus spike protein sequences corresponding to different variants (e.g., Alpha, Beta, Gamma). We report results using different classification and clustering algorithms and evaluate their performance using multiple evaluation metrics. Using two datasets, we show that our proposed methods help improve kernel performance compared to baseline and state-of-the-art methods in the healthcare domain.
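The two building blocks mentioned above, exact k-mer matching and minimizer-based preprocessing, can be sketched briefly. The snippet below counts shared k-mers between two sequences and selects minimizers (the smallest k-mer per sliding window; lexicographic order is used here for simplicity, whereas practical schemes often use a hash order):

```python
def kmers(seq, k):
    """All length-k subsequences of seq."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def kmer_match_score(a, b, k):
    """Exact-style similarity: number of distinct shared k-mers."""
    return len(set(kmers(a, k)) & set(kmers(b, k)))

def minimizers(seq, k, w):
    """Smallest k-mer in each window of w consecutive k-mers.

    Keeping only these representatives shrinks the sequence before
    kernel computation, which is the preprocessing idea used here.
    """
    all_kmers = kmers(seq, k)
    picked = set()
    for i in range(len(all_kmers) - w + 1):
        picked.add(min(all_kmers[i:i + w]))
    return picked
```

Information gain would then weight or select the k-mer features most discriminative between variants before the kernel is built.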
We present the design and baseline results of a new challenge in the ChaLearn meta-learning series, accepted at NeurIPS'22, focusing on "cross-domain" meta-learning. Meta-learning aims to leverage experience gained from previous tasks to solve new tasks efficiently (i.e., with better performance, less training data, and/or modest computational resources). While previous challenges in the series focused on within-domain few-shot learning problems, with the goal of learning N-way k-shot tasks efficiently (i.e., N-class classification problems with k training examples per class), this competition challenges participants to solve "any-way" and "any-shot" problems drawn from various domains (healthcare, ecology, biology, manufacturing, etc.), chosen for their humanitarian and societal impact. To that end, we created Meta-Album, a meta-dataset of 40 image classification datasets from 10 domains, from which we carve out tasks with any number of "ways" (in the range 2-20) and any number of "shots" (in the range 1-20). The competition is run by code submission, with fully blind testing on the CodaLab challenge platform. The winners' code will be open-sourced, enabling the deployment of automated machine learning solutions for few-shot image classification across several domains.
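An "any-way, any-shot" episode is just an N-way K-shot support set with N and K drawn per task. As a minimal sketch (dataset layout and names are illustrative, not the Meta-Album API), one episode can be sampled like this:

```python
import random

def sample_episode(dataset, n_way, k_shot, seed=0):
    """Sample an N-way K-shot support set from {class_name: [examples]}."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)          # pick N classes
    return {c: rng.sample(dataset[c], k_shot) for c in classes}  # K examples each

# Toy pool: 10 classes with 30 examples apiece.
data = {f"class_{i}": [f"img_{i}_{j}" for j in range(30)] for i in range(10)}
episode = sample_episode(data, n_way=5, k_shot=3)
```

In the challenge setting, n_way would itself be drawn from 2-20 and k_shot from 1-20 for each task.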
Human activity recognition is an emerging and important area in computer vision that seeks to determine the activity an individual or group of individuals is performing. Applications of this field range from generating highlight videos in sports to intelligent surveillance and gesture recognition. Most activity recognition systems rely on a combination of convolutional neural networks (CNNs) for feature extraction from the data and recurrent neural networks (RNNs) for determining the temporal dependencies of the data. This paper proposes and designs two transformer neural networks for human activity recognition: a recurrent transformer (ReT), a specialized neural network for making predictions on sequences of data, and a vision transformer (ViT), a transformer optimized for extracting salient features from images, to improve the speed and scalability of activity recognition. We provide an extensive comparison of the proposed transformer neural networks with modern CNN- and RNN-based human activity recognition models in terms of speed and accuracy.
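A vision transformer's first step is to tokenize each frame into flattened, non-overlapping patches. As a minimal sketch of that tokenization (nested lists stand in for an image tensor; the paper's actual ViT configuration is not specified here):

```python
def image_to_patches(image, patch):
    """Split an H x W image (nested lists) into non-overlapping
    patch x patch blocks, each flattened into a vector, as in a
    vision transformer's input tokenization."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - h % patch, patch):
        for c in range(0, w - w % patch, patch):
            patches.append([image[r + i][c + j]
                            for i in range(patch)
                            for j in range(patch)])
    return patches

# A toy 4x4 single-channel frame with pixel values 0..15.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
```

Each patch vector would then be linearly projected and combined with a position embedding before entering the transformer encoder.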
Vision-based human activity recognition has become one of the important research areas in the field of video analytics. Over the past decade, many advanced deep learning algorithms have been introduced to recognize complex human actions from video streams. These deep learning algorithms have shown impressive performance on the human activity recognition task. However, these newly introduced methods focus exclusively on either model performance or the effectiveness of these models in terms of computational efficiency and robustness, resulting in a biased trade-off in their proposals for addressing challenging human activity recognition problems. To overcome the limitations of contemporary deep learning models for human activity recognition, this paper proposes a computationally efficient yet generic spatial-cascaded framework that exploits deep discriminative spatial and temporal features for human activity recognition. To represent human actions effectively, we propose an efficient dual-attention convolutional neural network (CNN) architecture that leverages a unified channel-spatial attention mechanism to extract human-centric salient features in video frames. The dual channel-spatial attention layers, together with the convolutional layers, learn to attend more to the spatial receptive fields containing objects within the feature maps. The extracted discriminative salient features are then forwarded to a stacked bidirectional gated recurrent unit (Bi-GRU) for long-term temporal modeling and recognition of human actions, using both forward and backward pass gradient learning. Extensive experiments are conducted, and the obtained results show that the execution time of the proposed framework improves by up to 167x compared with most contemporary action recognition methods.
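To give intuition for the attention gating described above, the toy sketch below computes a per-channel gate from global average pooling followed by a sigmoid and rescales each feature map accordingly. This is a simplified stand-in for one half of a channel-spatial attention mechanism, not the paper's unified attention block:

```python
import math

def channel_attention(feature_maps):
    """Gate each channel by sigmoid(global average pool of that channel).

    feature_maps: list of channels, each an H x W nested list of floats.
    Channels with stronger average activation are passed through almost
    unchanged; weak channels are suppressed toward half strength or less.
    """
    gated = []
    for fmap in feature_maps:
        pooled = sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        gate = 1.0 / (1.0 + math.exp(-pooled))   # sigmoid
        gated.append([[v * gate for v in row] for row in fmap])
    return gated

fmaps = [[[0.0, 0.0], [0.0, 0.0]],     # quiet channel
         [[10.0, 10.0], [10.0, 10.0]]]  # strongly activated channel
out = channel_attention(fmaps)
```

A learned spatial counterpart would similarly gate each pixel location, and the two gates together form a channel-spatial attention layer.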